Learning Individualized Hyperparameter Settings

Authors

Abstract

The performance of optimization algorithms, and consequently of AI/machine learning solutions, is strongly influenced by the setting of their hyperparameters. Over the last decades, a rich literature has developed proposing methods to automatically determine the parameter setting for a problem of interest, aiming at either robust or instance-specific settings. Robust setting optimization is already a mature area of research, while instance-level setting is still in its infancy, with contributions mainly dealing with algorithm selection. The work reported in this paper belongs to the latter category, exploiting the generalization capabilities of artificial neural networks to adapt a general setting generated by state-of-the-art automatic configurators. Our approach differs significantly from analogous ones in the literature, both because we rely on neural systems to suggest the settings and because we propose a novel scheme in which different outputs are proposed for each input, in order to support learning from examples. The approach was validated on two algorithms that optimized instances of two different problems. We used an algorithm that is very sensitive to its hyperparameters, applied to generalized assignment instances, and a tabu search that is purportedly little sensitive to its settings, applied to quadratic assignment instances. The computational results in both cases attest to the effectiveness of the approach, especially when the instances to be solved are structurally different from those previously encountered.
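A minimal sketch of the core idea, under our own assumptions about data shapes and model choice (the abstract does not specify them): a small multilayer perceptron learns to map instance feature vectors to hyperparameter vectors, and each training instance is repeated with several perturbed target settings to emulate the multiple-outputs-per-input scheme described above. All names, sizes, and the perturbation rule are hypothetical.

```python
# Hypothetical sketch: learn a mapping from instance features to
# hyperparameter settings, assuming training pairs produced by an
# automatic configurator (one good setting per benchmark instance).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy data: 200 instances, 6 descriptive features each (sizes, densities, ...).
X = rng.normal(size=(200, 6))
# 3 hyperparameters per instance; here a synthetic ground-truth mapping.
Y = np.tanh(X @ rng.normal(size=(6, 3)))

# The paper pairs each input with several acceptable output settings; a
# simple way to emulate that is to repeat each instance with slightly
# perturbed target settings (an assumption, not the authors' exact scheme).
X_aug = np.repeat(X, 3, axis=0)
Y_aug = np.repeat(Y, 3, axis=0) + rng.normal(scale=0.05, size=(600, 3))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_aug, Y_aug)

# Predict a tailored setting for a previously unseen instance.
new_instance = rng.normal(size=(1, 6))
print(net.predict(new_instance))
```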


Similar Papers

Hyperparameter Search in Machine Learning

Abstract We describe the hyperparameter search problem in the field of machine learning and discuss its main challenges from an optimization perspective. Machine learning methods attempt to build models that capture some element of interest based on given data. Most common learning algorithms feature a set of hyperparameters that must be determined before training commences. The choice of hyper...
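To make the search problem concrete, here is a minimal random-search loop over two SVM hyperparameters: sample configurations, evaluate validation error, keep the best. The model, ranges, and budget are our own illustrative choices, not the paper's.

```python
# Minimal random-search sketch of the hyperparameter search problem.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, random_state=0)

best_score, best_cfg = -np.inf, None
for _ in range(20):
    # Sample a configuration on a log scale, a common choice for C and gamma.
    cfg = {"C": 10 ** rng.uniform(-2, 2), "gamma": 10 ** rng.uniform(-3, 1)}
    score = cross_val_score(SVC(**cfg), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg, best_score)
```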


Initializing Bayesian Hyperparameter Optimization via Meta-Learning

Model selection and hyperparameter optimization are crucial in applying machine learning to a novel dataset. Recently, a subcommunity of machine learning has focused on solving this problem with Sequential Model-based Bayesian Optimization (SMBO), demonstrating substantial successes in many applications. However, for computationally expensive algorithms the overhead of hyperparameter optimizatio...
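A hedged sketch of the warm-start idea: seed the surrogate model with the configurations that performed best on the most similar past datasets, then take one SMBO step. The meta-features, the distance measure, and the lower-confidence-bound rule are all illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: meta-learned initialization for SMBO over a 1-D config space.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Meta-database: per past dataset, its meta-features and its best config.
past_metafeatures = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
past_best_configs = np.array([[0.2], [0.7], [0.5]])

new_metafeatures = np.array([0.15, 0.85])
# Rank past datasets by similarity; take their best configs as seeds.
order = np.argsort(np.linalg.norm(past_metafeatures - new_metafeatures, axis=1))
seeds = past_best_configs[order[:2]]

def validation_error(cfg):
    # Stand-in for an expensive training/validation run.
    return float((cfg - 0.3) ** 2)

X_obs = seeds
y_obs = np.array([validation_error(c[0]) for c in X_obs])

# One SMBO step: fit surrogate, pick the most promising candidate.
gp = GaussianProcessRegressor().fit(X_obs, y_obs)
cand = np.linspace(0, 1, 101).reshape(-1, 1)
mu, sigma = gp.predict(cand, return_std=True)
next_cfg = float(cand[np.argmin(mu - sigma), 0])  # lower-confidence bound
print(next_cfg, validation_error(next_cfg))
```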


Hyperparameter learning in probabilistic prototype-based models

We present two approaches to extend Robust Soft Learning Vector Quantization (RSLVQ). This algorithm for nearest prototype classification is derived from an explicit cost function and follows the dynamics of a stochastic gradient ascent. The RSLVQ cost function is defined in terms of a likelihood ratio and involves a hyperparameter which is kept constant during training. We propose to adapt the...
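A simplified illustration of learning the hyperparameter instead of fixing it: a toy RSLVQ-style likelihood-ratio cost is maximized by gradient ascent over both the prototypes and the bandwidth sigma. Numerical gradients replace the paper's analytic updates, and the one-prototype-per-class setup is our own simplification.

```python
# Toy RSLVQ-style sketch: adapt the bandwidth hyperparameter sigma
# alongside the prototypes by numerical gradient ascent on the
# likelihood-ratio cost (the paper derives analytic update rules).
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = np.array([[-0.5, -0.5], [0.5, 0.5]])  # one prototype per class
c = np.array([0, 1])                      # prototype class labels
sigma = 1.0

def cost(W, sigma):
    # sum over samples of log p(x, correct class) - log p(x)
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    g = np.exp(-d / (2 * sigma ** 2))
    correct = np.where(c[None, :] == y[:, None], g, 0.0).sum(1)
    return np.sum(np.log(correct + 1e-12) - np.log(g.sum(1) + 1e-12))

eps, lr = 1e-5, 0.01
for _ in range(200):
    # numerical gradient for sigma (the hyperparameter being learned)
    gs = (cost(W, sigma + eps) - cost(W, sigma - eps)) / (2 * eps)
    sigma += lr * gs / len(X)
    # numerical gradient for each prototype coordinate
    for idx in np.ndindex(W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        W[idx] += lr * (cost(Wp, sigma) - cost(Wm, sigma)) / (2 * eps) / len(X)

print("learned sigma:", round(sigma, 3))
```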


Learning to Warm-Start Bayesian Hyperparameter Optimization

Hyperparameter optimization undergoes extensive evaluations of validation errors in order to find the best configuration of hyperparameters. Bayesian optimization is now popular for hyperparameter optimization, since it reduces the number of validation error evaluations required. Suppose that we are given a collection of datasets on which hyperparameters are already tuned by either humans with ...
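A brief sketch of the transfer idea, under the assumption that a table of (dataset meta-features, tuned hyperparameters) pairs is available: a regressor trained on that table predicts a starting configuration for a new dataset. The k-NN model and the toy relation between meta-features and tuned values are ours, not the paper's method.

```python
# Sketch: predict a warm-start configuration from dataset meta-features.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# Past datasets: meta-features and the hyperparameters tuned on each.
meta = rng.uniform(size=(30, 4))
tuned = 0.5 * meta[:, :2] + 0.1 * rng.normal(size=(30, 2))  # toy relation

warm_starter = KNeighborsRegressor(n_neighbors=3).fit(meta, tuned)

new_meta = rng.uniform(size=(1, 4))
init_config = warm_starter.predict(new_meta)[0]
print("BO initial configuration:", init_config)
# A BO loop would evaluate init_config first, fit its surrogate to the
# result, and continue acquisition-driven search from there.
```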


Gradient-based Hyperparameter Optimization through Reversible Learning

Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization...
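A compact hypergradient sketch: unroll a few SGD steps on a toy regression problem and backpropagate the validation loss to the (log) learning rate. The paper's contribution is computing this exactly by reversing the training dynamics instead of storing the unrolled graph; this sketch keeps the graph in memory for simplicity.

```python
# Sketch: exact gradient of validation loss w.r.t. the learning rate,
# obtained by differentiating through an unrolled training loop.
import torch

torch.manual_seed(0)
X_tr, y_tr = torch.randn(64, 5), torch.randn(64, 1)
X_val, y_val = torch.randn(32, 5), torch.randn(32, 1)

log_lr = torch.tensor(-2.0, requires_grad=True)  # the hyperparameter
w = torch.zeros(5, 1, requires_grad=True)        # model weights

lr = log_lr.exp()
for _ in range(10):                              # unrolled training loop
    loss = ((X_tr @ w - y_tr) ** 2).mean()
    (g,) = torch.autograd.grad(loss, w, create_graph=True)
    w = w - lr * g                               # keeps the graph through w

val_loss = ((X_val @ w - y_val) ** 2).mean()
val_loss.backward()                              # d val_loss / d log_lr
print("hypergradient:", log_lr.grad.item())
```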



Journal

Journal title: Algorithms

Year: 2023

ISSN: 1999-4893

DOI: https://doi.org/10.3390/a16060267